Graph neural networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message-passing steps they perform, and hence a small receptive field, there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting times, and commute times. We propose message-passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has low computational complexity, and our features are invariant to permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, enabling us to outperform relevant benchmarks on a wide variety of tasks, often with significantly fewer message-passing steps. On OGB-LSC-PCQM4Mv1, one of the largest public graph regression datasets, we obtain the best known single-model validation MAE at the time of writing.
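As background for the affinity measures named above (not the paper's implementation), the effective resistance between two nodes can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian, and the commute time follows from the standard identity C(u, v) = 2m R(u, v). A minimal numpy sketch, where the function name and the path-graph example are illustrative:

```python
import numpy as np

def effective_resistance(adj: np.ndarray, u: int, v: int) -> float:
    """Effective resistance between nodes u and v, treating each edge
    as a unit resistor: R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # graph Laplacian L = D - A
    lap_pinv = np.linalg.pinv(lap)      # Moore-Penrose pseudoinverse L^+
    e = np.zeros(len(adj))
    e[u], e[v] = 1.0, -1.0
    return float(e @ lap_pinv @ e)

# Path graph 0-1-2: two unit resistors in series between nodes 0 and 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
r = effective_resistance(A, 0, 2)       # resistors in series: 1 + 1 = 2
m = A.sum() / 2                          # number of edges
commute_time = 2 * m * r                 # C(u, v) = 2m * R(u, v)
```

Both quantities are invariant to node relabeling, which is the permutation-invariance property the abstract refers to.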
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision-based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision-based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
This report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring and activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of scientific advances, available products, and research projects. Open challenges are also highlighted.
In this work, we conduct an empirical comparative study of text-independent speaker verification performance in emotional and stressful environments. The work combines deep models with shallow architectures, resulting in novel hybrid classifiers. Four distinct hybrid models were utilized: deep neural network-hidden Markov model (DNN-HMM), deep neural network-Gaussian mixture model (DNN-GMM), Gaussian mixture model-deep neural network (GMM-DNN), and hidden Markov model-deep neural network (HMM-DNN). All models were based on novel implementation architectures. The comparative study used three distinct speech datasets: a private Arabic dataset and two public English databases, namely Speech Under Simulated and Actual Stress (SUSAS) and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). The test results of the aforementioned hybrid models showed that the proposed HMM-DNN leveraged verification performance in emotional and stressful environments. The results also showed that HMM-DNN outperformed all other hybrid models in terms of the equal error rate (EER) and area under the curve (AUC) evaluation metrics. The verification systems, averaged over the three datasets, yielded EERs of 7.19%, 16.85%, 11.51%, and 11.90% based on HMM-DNN, DNN-HMM, DNN-GMM, and GMM-DNN, respectively. Furthermore, we found that the DNN-GMM model demonstrated the least computational complexity compared to all other hybrid models in both talking environments. Conversely, the HMM-DNN model required the most training time. The findings also demonstrated that EER and AUC values depend on the database when comparing average emotional and stressful performance.
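For readers unfamiliar with the EER metric reported above, it is the operating point at which the false-acceptance rate equals the false-rejection rate. A minimal sketch (not the paper's evaluation code) that locates it by sweeping over observed score thresholds, with illustrative names and toy scores:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Equal error rate: the threshold where the false-acceptance rate
    (FAR) and false-rejection rate (FRR) are closest, found by a simple
    sweep over all observed score thresholds."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)    # impostor trials wrongly accepted
        frr = np.mean(genuine < t)      # genuine trials wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2.0
    return float(eer)

# Perfectly separable scores give an EER of 0.
genuine = np.array([0.9, 0.8, 0.7])
impostor = np.array([0.3, 0.2, 0.1])
eer = equal_error_rate(genuine, impostor)
```

In practice, libraries interpolate between thresholds for a smoother estimate; the sweep above is the simplest faithful version of the definition.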
Recent speech emotion recognition (SER) analysis has made considerable progress with the use of MFCC spectrogram features and the implementation of neural network approaches such as convolutional neural networks (CNNs). Capsule networks (CapsNets) offer an alternative to CNNs thanks to their hierarchical representations with larger capacity. To address these issues, this study introduces a novel text-independent and speaker-independent SER architecture, in which a dual-channel long short-term memory compressed CapsNet (DC-LSTM COMP-CapsNet) algorithm is proposed based on the structural features of CapsNet. Our proposed novel classifier ensures the energy efficiency of the model and an adequate compression method in speech emotion recognition, which the original structure of CapsNet does not provide. Moreover, a grid search approach is used to obtain optimal solutions. The results show improved performance and reduced training and testing running times. The speech datasets used to evaluate our algorithm are: the Arabic Emirati-accented corpus, the English Speech Under Simulated and Actual Stress corpus, the English Ryerson Audio-Visual Database of Emotional Speech and Song corpus, and the Crowd-Sourced Emotional Multimodal Actors dataset. This work reveals that the optimal feature extraction method compared with other known methods is MFCC delta-delta. Using the four datasets and MFCC delta-delta, DC-LSTM COMP-CapsNet surpasses all state-of-the-art systems, classical classifiers, CNNs, and the original CapsNet. Our results show that the proposed CapsNet-based work yields an average emotion recognition accuracy of 89.3%, superior to that attained by CNNs, support vector machines, multilayer perceptrons, k-nearest neighbors, radial basis functions, and naive Bayes.
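The delta-delta features highlighted above are second-order frame-to-frame dynamics of the MFCCs. As an illustration (the paper's exact extraction pipeline is not specified here), the common regression-window formula for deltas can be sketched in numpy, with a randomly generated matrix standing in for real MFCCs:

```python
import numpy as np

def delta(feat: np.ndarray, n: int = 2) -> np.ndarray:
    """Regression-based delta features over a +/- n frame window, the
    standard formula used for MFCC delta and delta-delta coefficients.
    `feat` has shape (n_coeffs, n_frames)."""
    padded = np.pad(feat, ((0, 0), (n, n)), mode="edge")
    denom = 2 * sum(i * i for i in range(1, n + 1))
    out = np.zeros_like(feat, dtype=float)
    width = padded.shape[1]
    for i in range(1, n + 1):
        out += i * (padded[:, n + i:width - n + i]
                    - padded[:, n - i:width - n - i])
    return out / denom

mfcc = np.random.default_rng(0).normal(size=(13, 50))  # toy 13-dim MFCCs
d1 = delta(mfcc)                  # delta (velocity)
d2 = delta(d1)                    # delta-delta (acceleration)
features = np.vstack([mfcc, d1, d2])  # common 39-dim stacked input
```

Applying `delta` twice yields the delta-delta coefficients; stacking all three blocks gives the 39-dimensional feature vector commonly fed to SER classifiers.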
Crown-of-Thorns Starfish (COTS) outbreaks are a major cause of coral loss on the Great Barrier Reef (GBR), and substantial surveillance and control programs are underway in an attempt to manage COTS populations at ecologically sustainable levels. We release a large-scale, annotated underwater image dataset from a COTS outbreak area on the GBR to encourage research in machine learning and AI-driven technologies to improve the detection, monitoring, and management of COTS populations at reef scale. The dataset is released and hosted through a competition that challenges the international machine learning community with the task of COTS detection from these underwater images.
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
Object detectors are conventionally trained by a weighted sum of classification and localization losses. Recent studies (e.g., predicting IoU with an auxiliary head, Generalized Focal Loss, Rank & Sort Loss) have shown that forcing these two loss terms to interact with each other in non-conventional ways creates a useful inductive bias and improves performance. Inspired by these works, we focus on the correlation between classification and localization and make two main contributions: (i) We provide an analysis about the effects of correlation between classification and localization tasks in object detectors. We identify why correlation affects the performance of various NMS-based and NMS-free detectors, and we devise measures to evaluate the effect of correlation and use them to analyze common detectors. (ii) Motivated by our observations, e.g., that NMS-free detectors can also benefit from correlation, we propose Correlation Loss, a novel plug-in loss function that improves the performance of various object detectors by directly optimizing correlation coefficients: E.g., Correlation Loss on Sparse R-CNN, an NMS-free method, yields 1.6 AP gain on COCO and 1.8 AP gain on Cityscapes dataset. Our best model on Sparse R-CNN reaches 51.0 AP without test-time augmentation on COCO test-dev, reaching state-of-the-art. Code is available at https://github.com/fehmikahraman/CorrLoss
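To make the idea of directly optimizing a correlation coefficient concrete, here is a minimal sketch (not the paper's implementation; the paper considers several coefficients and integrates the loss into full detectors) of a Pearson-correlation-based loss between classification confidences and localization quality, using illustrative names and toy numbers:

```python
import numpy as np

def correlation_loss(cls_scores: np.ndarray, ious: np.ndarray) -> float:
    """Illustrative loss encouraging classification scores to be
    consistent with localization quality: 1 - Pearson correlation
    between the per-box confidence and its IoU with the ground truth."""
    s = cls_scores - cls_scores.mean()
    q = ious - ious.mean()
    corr = (s @ q) / (np.linalg.norm(s) * np.linalg.norm(q) + 1e-12)
    return float(1.0 - corr)

# Scores that rank boxes exactly as their IoUs do incur near-zero loss.
scores = np.array([0.9, 0.7, 0.5, 0.3])
ious = np.array([0.85, 0.65, 0.45, 0.25])
loss = correlation_loss(scores, ious)   # near 0 for perfectly correlated inputs
```

Minimizing such a term pushes the detector to assign higher confidence to better-localized boxes, which is the inductive bias the abstract describes.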
In this paper, we reduce the complexity of approximating the correlation clustering problem from $O(m\times\left( 2+ \alpha (G) \right)+n)$ to $O(m+n)$ for any given value of $\varepsilon$ for a complete signed graph with $n$ vertices and $m$ positive edges, where $\alpha(G)$ is the arboricity of the graph. Our approach gives the same output as the original algorithm and makes it possible to implement the algorithm in a fully dynamic setting where edge sign flipping and vertex addition/removal are allowed. Constructing this index costs $O(m)$ memory and $O(m\times\alpha(G))$ time. We also study the structural properties of the non-agreement measure used in the approximation algorithm. The theoretical results are accompanied by a full set of experiments on seven real-world graphs. These results show the superiority of our index-based algorithm over the non-indexed one, with a 34% decrease in running time on average.
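For context on the problem being approximated, a classic baseline (plainly not the indexed, non-agreement-based algorithm of this paper) is the randomized Pivot algorithm for correlation clustering on complete signed graphs: repeatedly pick a random unclustered vertex and cluster it with its unclustered positive neighbors. A self-contained sketch with illustrative names:

```python
import random

def pivot_correlation_clustering(n, positive_edges, seed=0):
    """Classic randomized Pivot algorithm for correlation clustering on
    a complete signed graph (edges not listed are negative): pick a
    random unclustered pivot, cluster it with its unclustered positive
    neighbors, repeat until every vertex is clustered."""
    rng = random.Random(seed)
    pos = {u: set() for u in range(n)}
    for u, v in positive_edges:
        pos[u].add(v)
        pos[v].add(u)
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        pivot = rng.choice(sorted(unclustered))
        cluster = {pivot} | (pos[pivot] & unclustered)
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

# Two positive triangles joined only by negative edges separate cleanly.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
result = pivot_correlation_clustering(6, edges)
```

Pivot is a 3-approximation in expectation; the algorithm discussed in the abstract instead builds an index so that an $\varepsilon$-parameterized approximation can be maintained under dynamic updates.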
This paper proposes a novel self-supervised based Cut-and-Paste GAN to perform foreground object segmentation and generate realistic composite images without manual annotations. We accomplish this goal by a simple yet effective self-supervised approach coupled with the U-Net based discriminator. The proposed method extends the ability of the standard discriminators to learn not only the global data representations via classification (real/fake) but also learn semantic and structural information through pseudo labels created using the self-supervised task. The proposed method empowers the generator to create meaningful masks by forcing it to learn informative per-pixel as well as global image feedback from the discriminator. Our experiments demonstrate that our proposed method significantly outperforms the state-of-the-art methods on the standard benchmark datasets.